Notes from a recent Hadoop cluster install: why the NameNode failed to start
I recently installed a Hadoop cluster and the NameNode failed to start. Checking the log turned up the following causes.

Exception 1:

```
2018-08-04 12:48:43,125 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-javoft/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)
2018-08-04 12:48:43,126 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
```

Cause of this exception: "storage directory does not exist or is not accessible", i.e. the NameNode storage directory /tmp/hadoop-javoft/dfs/name is missing or unreadable. Solution:
Configure the hadoop.tmp.dir property in core-site.xml to override the default, which resolves the error above. (By default hadoop.tmp.dir points under /tmp, whose contents the OS may clean up, so moving it to a persistent directory is the usual fix.)

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/soft/hadoop-2.7.6/hadoopdata</value>
</property>
```
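After pointing hadoop.tmp.dir at a persistent directory, the directory must also exist and be writable by the user that starts the NameNode, which is exactly what the "does not exist or is not accessible" message complains about. A minimal sketch of that check; the demo path below is only a stand-in for the real hadoop.tmp.dir value:

```shell
# Sketch: verify the NameNode storage directory exists and is writable.
# DATADIR here is a demo path; in the real fix it would be the value of
# hadoop.tmp.dir, e.g. /opt/soft/hadoop-2.7.6/hadoopdata.
DATADIR="${TMPDIR:-/tmp}/hadoopdata-demo"
mkdir -p "$DATADIR"
if [ -d "$DATADIR" ] && [ -w "$DATADIR" ]; then
    echo "storage directory ok: $DATADIR"
else
    echo "storage directory missing or not writable: $DATADIR"
fi
# On a brand-new cluster, initialize the new directory afterwards.
# WARNING: formatting wipes existing HDFS metadata, so only do this
# on a fresh install:
#   hdfs namenode -format
#   start-dfs.sh
```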
Exception 2:

```
2018-08-04 16:11:51,515 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode
java.net.BindException: Problem binding to [master:9000] java.net.BindException: Cannot assign requested address; For more details see: http://wiki.apache.org/hadoop/BindException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
	at org.apache.hadoop.ipc.Server.bind(Server.java:484)
	at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:690)
	at org.apache.hadoop.ipc.Server.<init>(Server.java:2379)
	at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:951)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
	at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
	at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:351)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:675)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:648)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:820)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:804)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1516)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1582)
Caused by: java.net.BindException: Cannot assign requested address
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:433)
	at sun.nio.ch.Net.bind(Net.java:425)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at org.apache.hadoop.ipc.Server.bind(Server.java:467)
	... 13 more
2018-08-04 16:11:51,518 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2018-08-04 16:11:51,526 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
```

Cause of this exception: most likely a hosts configuration problem ("Cannot assign requested address"). Check whether the hostname configured in core-site.xml matches this machine's actual hostname; that was exactly my problem (the log shows it trying to bind to master:9000, but the machine's hostname is wang).

Solution: first use hostname to see the machine name this host's IP is bound to:

```
[root@wang ~]# hostname
wang
```

Then check the configuration in core-site.xml:

```
[root@wang ~]# cat $HADOOP_HOME/etc/hadoop/core-site.xml
```

Here is the property that must match the machine name:

```xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://wang:9000</value>
</property>
```

Once these two names agree, the exception above is resolved.
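The hostname-vs-config comparison above can also be scripted. A rough sketch; the sample config file written below is only a stand-in for the real $HADOOP_HOME/etc/hadoop/core-site.xml, and the sed pattern assumes the usual hdfs://HOST:PORT form:

```shell
# Sketch: extract the host from fs.defaultFS and compare it with the
# local hostname. The sample config written here stands in for the
# real $HADOOP_HOME/etc/hadoop/core-site.xml.
CONF="${TMPDIR:-/tmp}/core-site-demo.xml"
cat > "$CONF" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://wang:9000</value>
  </property>
</configuration>
EOF
# Pull the host part out of hdfs://HOST:PORT.
CFG_HOST=$(sed -n 's|.*<value>hdfs://\([^:<]*\).*|\1|p' "$CONF")
echo "fs.defaultFS host: $CFG_HOST"
if [ "$CFG_HOST" = "$(hostname)" ]; then
    echo "hostname matches, the bind should succeed"
else
    echo "MISMATCH: local hostname is $(hostname)"
fi
```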